Qwen3 30B A6B 16 Extreme 128k Context
A fine-tuned version of the Qwen3-30B-A3B mixture-of-experts model, with the number of activated experts increased from the default 8 to 16 (roughly 6B active parameters, hence the "A6B" in the name) and the context window expanded to 128k tokens, making it suited to complex reasoning scenarios.
Large Language Model
Transformers
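Since the listing tags the Transformers library, the sketch below shows one plausible way to load such a checkpoint and confirm the routed-expert count from its config. The repository id is an assumption (substitute the actual model id), and the `num_experts_per_tok` field follows the usual naming in Qwen MoE configs; treat both as illustrative rather than guaranteed.

```python
# Minimal sketch: load the checkpoint with Hugging Face Transformers and
# check the number of experts routed per token.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "DavidAU/Qwen3-30B-A6B-16-Extreme-128k-context"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # keep the checkpoint's native precision
    device_map="auto",    # spread the MoE layers across available devices
)

# The routed-expert count lives in the model config; for this fine-tune it
# should read 16, versus 8 in the base Qwen3-30B-A3B (assumed field name).
print(model.config.num_experts_per_tok)

prompt = "Summarize why activating more experts increases active parameters."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Loading with `device_map="auto"` matters in practice: although only 16 experts are active per token, all 30B parameters must still be resident in memory, so the weights usually need to be sharded across GPUs or offloaded.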